Princess Margaret Cancer Centre
AI outperforms humans in creating cancer treatments, but do doctors trust it?
The impact of deploying Artificial Intelligence (AI) for radiation cancer therapy in a real-world clinical setting has been tested by Princess Margaret researchers in a unique study involving physicians and their patients. The team directly compared physician evaluations of radiation treatments generated by a machine learning (ML) algorithm with conventional treatments generated by humans. In the majority of the 100 patients studied, physicians deemed the ML-generated treatments clinically acceptable. Overall, 89% of ML-generated treatments were considered clinically acceptable, and 72% were selected over the human-generated alternatives in head-to-head comparisons. Moreover, the ML treatment process was 60% faster than the conventional human-driven process, reducing the overall time from 118 hours to 47 hours.
Scientists voice concerns, call for transparency and reproducibility in AI research
In an article published in Nature on October 14, 2020, scientists at Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard School of Public Health, Massachusetts Institute of Technology, and others challenge scientific journals to hold computational researchers to higher standards of transparency, and call for their colleagues to share their code, models and computational environments in publications. "Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from," says Dr. Benjamin Haibe-Kains, Senior Scientist at Princess Margaret Cancer Centre and first author of the article. "But in computational research, it's not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress." The authors voiced their concern about the lack of transparency and reproducibility in AI research after a Google Health study by McKinney et al., published in a prominent scientific journal in January 2020, claimed an AI system could outperform human radiologists in both robustness and speed for breast cancer screening.
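The call to share code, models and computational environments is, at bottom, a call for determinism and provenance: a second lab should be able to rerun an analysis and obtain the same numbers. As a minimal illustrative sketch (hypothetical, not code from the study or from the McKinney et al. paper), one common practice is to fix random seeds and record the software environment alongside every result:

```python
# Hypothetical sketch: fixing a random seed and recording provenance so a
# computational result can be reproduced exactly by other researchers.
import json
import platform
import random

def run_experiment(seed: int) -> dict:
    """Run a toy 'analysis' whose output is fully determined by the seed."""
    rng = random.Random(seed)  # seeded RNG: same seed -> same numbers
    data = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    mean = sum(data) / len(data)
    # Record the seed and environment details next to the result, so the
    # exact run can be scrutinized and repeated.
    return {
        "seed": seed,
        "python_version": platform.python_version(),
        "result_mean": mean,
    }

first = run_experiment(seed=42)
second = run_experiment(seed=42)
# Identical seeds in the same environment yield identical results.
assert first["result_mean"] == second["result_mean"]
print(json.dumps(first, indent=2))
```

Real studies involve far more (pinned library versions, container images, shared model weights), but the principle the authors advocate is the same: publish enough of the computational environment that the main finding can be independently verified.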
Can We Trust AI Doctors? Google Health and Academics Battle It Out
Machine learning is taking medical diagnosis by storm. From eye disease and breast and other cancers to more amorphous neurological disorders, AI routinely matches physician performance, if not beats it outright. Yet how much can we take those results at face value? When it comes to life-and-death decisions, when can we put our full trust in enigmatic algorithms: "black boxes" that even their creators cannot fully explain or understand? The problem gets more complex as medical AI spans multiple disciplines and developers, including academic and industry powerhouses such as Google, Amazon, and Apple, with disparate incentives.